From Task-Specific AI to General-Purpose Large Language Models
AI012 Lesson 1

The Paradigm Shift in Artificial Intelligence

1. From Specific to General

The field of AI has undergone a massive transformation in how models are trained and deployed.

  • Old Paradigm (Task-Specific Training): Models like early CNNs or BERT were trained for one specific goal (e.g., Sentiment Analysis only). You needed a different model for translation, summarization, etc.
  • New Paradigm (Centralized Pre-train + Prompt): One massive model (LLM) learns general world knowledge from internet-scale datasets. It can then be directed to perform nearly any linguistic task simply by changing the input prompt (see the sketch after this list).
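
A minimal sketch of the new paradigm in code: it assumes the Hugging Face transformers library (with PyTorch) is installed and uses the small gpt2 checkpoint purely so the example runs anywhere. A modern LLM would answer these prompts far better, but the point is that the same loaded weights serve every task and only the prompt changes.

# paradigm_demo.py -- illustrative sketch, not part of the original lesson code
# Assumes: pip install transformers torch
from transformers import pipeline

# One general-purpose model, loaded once. "gpt2" is a tiny stand-in for a
# modern LLM, chosen only so the sketch runs on a laptop.
llm = pipeline("text-generation", model="gpt2")

# The prompt selects the task; the weights never change.
prompts = {
    "sentiment":   "Review: 'The battery died after an hour.' Sentiment (positive/negative):",
    "translation": "Translate to French: 'Good morning, everyone.'",
    "summary":     "Summarize in one sentence: Large language models learn general "
                   "knowledge from internet-scale text and are steered with prompts.",
}

for task, prompt in prompts.items():
    output = llm(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
    print(f"--- {task} ---\n{output}\n")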

2. Architectural Evolution

  • Encoder-only (The BERT Era): Focused on understanding and classification. These models read text bidirectionally to grasp deep context but are not designed to generate new text.
  • Decoder-only (The GPT/Llama Era): The modern standard for generative AI. These models use auto-regressive modeling to predict the next token, making them ideal for open-ended generation and conversation (a toy version of this loop follows the list).
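
To make "auto-regressive" concrete, here is a toy next-token loop in plain Python. The probability table is invented for illustration and only conditions on the previous token (a real decoder attends over the entire context), but the loop itself, predict a token, append it, predict again, is exactly the mechanism described above.

# autoregressive_toy.py -- illustrative sketch with an invented probability table
import random

random.seed(0)

# Stand-in for a trained decoder: given the most recent token, return a
# probability distribution over possible next tokens. (A real model would
# condition on *all* previous tokens, not just the last one.)
NEXT_TOKEN_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    "cat":  {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog":  {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat":  {"down": 0.8, "<end>": 0.2},
    "ran":  {"away": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        # Sample the next token, then feed it back in: this loop is the
        # "auto-regression" that decoder-only models perform at scale.
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))   # prints a short sampled continuation, e.g. "the dog sat down"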

3. Key Drivers of Change

  • Self-Supervised Learning: Training on vast amounts of unlabeled internet data, removing the bottleneck of human annotation.
  • Scaling Laws: The empirical observation that AI performance scales predictably with model size (parameters), data volume (tokens), and total compute. Both of these drivers are sketched in code after this list.
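
Two quick sketches of these drivers follow. The first turns a raw, unlabeled sentence into next-token training pairs; the labels are just the same tokens shifted by one, which is why no human annotation is needed. The second evaluates a loss curve of the power-law form reported in early scaling-law work; the constant and exponent are roughly the values from Kaplan et al.'s 2020 study, included only to show the smooth, predictable trend, not as exact figures.

# key_drivers_demo.py -- illustrative sketch; treat the numbers as approximate

# --- Self-supervised learning: the labels come from the text itself ---
text = "large language models learn general knowledge from unlabeled text"
tokens = text.split()
inputs  = tokens[:-1]        # what the model sees
targets = tokens[1:]         # what it must predict (the next token)
for seen, predict in zip(inputs, targets):
    print(f"given {seen!r:>12} -> predict {predict!r}")

# --- Scaling laws: loss falls predictably as a power law in model size ---
# L(N) ~ (N_c / N) ** alpha; the values below are roughly those reported by
# Kaplan et al. (2020) and are used here only to illustrate the trend.
N_C, ALPHA = 8.8e13, 0.076
for n_params in (1e8, 1e9, 1e10, 1e11):
    predicted_loss = (N_C / n_params) ** ALPHA
    print(f"{n_params:10.0e} parameters -> predicted loss {predicted_loss:.2f}")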
Key Insight
AI has transitioned from "Task-specific tools" to "General-purpose agents" that exhibit emergent abilities like reasoning and in-context learning.
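
In-context learning, one of those emergent abilities, can be demonstrated with nothing more than a prompt string: the "training examples" live inside the prompt, and a capable LLM picks up the pattern with no weight updates. The sketch below only builds the prompt; which model you send it to is left open, since the lesson does not prescribe one.

# in_context_learning_prompt.py -- illustrative few-shot prompt; no model call is made
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "Absolutely loved it, would buy again."
Sentiment: Positive

Review: "Broke after two days, very disappointed."
Sentiment: Negative

Review: "The screen is gorgeous and setup took two minutes."
Sentiment:"""

# Sent to a capable LLM, this prompt typically completes with "Positive":
# the earlier examples act as in-context "training data" even though no
# parameter is updated.
print(few_shot_prompt)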
Question 1
What is the primary difference between the "Old Paradigm" and the "New Paradigm" of AI?
Moving from cloud computing to local processing.
Moving from task-specific training to centralized pre-training with prompting.
Moving from Python to C++ for model development.
Moving from Decoder-only to Encoder-only architectures.
Question 2
According to Scaling Laws, which three factors are fundamentally linked to model performance?
Internet speed, RAM size, and CPU cores.
Human annotators, code efficiency, and server location.
Model size (parameters), data volume (tokens), and total computation.
Prompt length, temperature setting, and top-k value.
Challenge: Evaluating Architectural Fitness
Apply your knowledge of model architectures to real-world scenarios.
You are an AI architect tasked with selecting the right foundational approach for two different projects. You must choose between an Encoder-only (like BERT) and a Decoder-only (like GPT) architecture.
Task 1
You are building a system that only needs to classify incoming emails as "Spam" or "Not Spam" based on the entire context of the message. Which architecture is more efficient for this narrow task?
Solution: Encoder-only (e.g., BERT)

Because the task is classification and requires deep, bidirectional understanding of the text without needing to generate new text, an Encoder-only model is highly efficient and appropriate.
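
As a concrete sketch of that choice (assuming the Hugging Face transformers and PyTorch libraries; bert-base-uncased is just an example checkpoint), the snippet below adds a two-label classification head to an encoder. The head starts out randomly initialized, so it would still need fine-tuning on labeled spam/not-spam emails before its predictions mean anything.

# spam_classifier_sketch.py -- illustrative; fine-tune before real use
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"              # example encoder-only checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

email = "Congratulations! You have won a free cruise. Click here to claim."
inputs = tokenizer(email, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits            # shape [1, 2]: spam vs. not spam

probs = torch.softmax(logits, dim=-1)
print(probs)   # only meaningful after fine-tuning on a labeled spam dataset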
Task 2
You are building a creative writing assistant that helps authors brainstorm ideas and write the next paragraph of their story. Which architecture is the modern standard for this?
Solution: Decoder-only (e.g., GPT/Llama)

This task requires open-ended text generation. Decoder-only models are designed specifically for auto-regressive next-token prediction, making them the standard for generative AI applications.
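
A matching sketch for this task, again assuming transformers is installed and using the small gpt2 checkpoint only as a stand-in for a modern generative model; sampling is enabled because a brainstorming assistant benefits from varied suggestions rather than a single greedy continuation.

# writing_assistant_sketch.py -- illustrative; gpt2 stands in for a larger model
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story_so_far = (
    "The lighthouse keeper found the letter wedged between two stones, "
    "its ink still wet despite the storm."
)

# Sampling (rather than greedy decoding) keeps the suggestions varied.
suggestion = generator(
    story_so_far,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
)[0]["generated_text"]

print(suggestion)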